167 research outputs found

    Greed is Good: Algorithmic Results for Sparse Approximation

    On the linear independence of spikes and sines

    The purpose of this work is to survey what is known about the linear independence of spikes and sines. The paper provides new results for the case where the locations of the spikes and the frequencies of the sines are chosen at random. This problem is equivalent to studying the spectral norm of a random submatrix drawn from the discrete Fourier transform matrix. The proof depends on an extrapolation argument of Bourgain and Tzafriri. Comment: 16 pages, 4 figures. Revision with a new proof of the major theorem.
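
    As a hedged numerical illustration of the equivalence mentioned in the abstract (not code from the paper), the sketch below draws a random submatrix of the unitary discrete Fourier transform matrix and computes its spectral norm; the ambient dimension N and the number of sampled spikes and sines k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512   # ambient dimension (illustrative)
k = 64    # number of spikes and number of sines sampled (illustrative)

# Unitary discrete Fourier transform matrix.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)

rows = rng.choice(N, size=k, replace=False)   # random frequencies of the sines
cols = rng.choice(N, size=k, replace=False)   # random locations of the spikes

# Random k-by-k submatrix of the DFT matrix; per the abstract, its spectral
# norm is the quantity that governs the linear independence question.
sub = F[np.ix_(rows, cols)]
print("spectral norm of the random submatrix:", np.linalg.norm(sub, 2))
```

    Roughly, the sampled spikes and sines remain linearly independent as long as this norm stays strictly below one, which is why the random-submatrix norm is the object of study.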

    Finite-Step Algorithms for Constructing Optimal CDMA Signature Sequences

    Designing structured tight frames via an alternating projection method

    User-friendly tail bounds for sums of random matrices

    This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable. Comment: Current paper is the version of record. The material on Freedman's inequality has been moved to a separate note; other martingale bounds are described in Caltech ACM Report 2011-0
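
    As a hedged illustration of how bounds of this type are used (not code from the paper), the sketch below compares the empirical tail of the maximum eigenvalue of a matrix Rademacher series with a dimension-dependent bound of the form d*exp(-t^2/(2*sigma^2)), where sigma^2 = ||sum_k A_k^2||; the constants follow the standard statement of the matrix Rademacher/Hoeffding inequality and should be treated as an assumption here.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 200                       # matrix dimension and number of summands (illustrative)

# Fixed self-adjoint coefficient matrices A_k.
A = rng.standard_normal((n, d, d)) / np.sqrt(n)
A = (A + A.transpose(0, 2, 1)) / 2.0

# Variance proxy sigma^2 = || sum_k A_k^2 || appearing in the bound.
sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), 2)

t, trials, hits = 6.0, 2000, 0
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)      # independent Rademacher signs
    S = np.tensordot(eps, A, axes=1)           # S = sum_k eps_k * A_k
    hits += np.linalg.eigvalsh(S)[-1] >= t     # does the largest eigenvalue exceed t?

print("empirical tail P(lambda_max >= t):", hits / trials)
print("matrix bound d*exp(-t^2/(2*sigma2)):", d * np.exp(-t**2 / (2 * sigma2)))
# The bound is valid for every t but is typically conservative.
```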

    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of Compressed Sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
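
    A minimal sketch of the exact $\ell_1$-recovery problem referred to above, assuming a Gaussian sensing matrix and noiseless measurements; it solves min ||x||_1 subject to Ax = b through the standard linear-programming reformulation and does not implement the paper's verifiable conditions. The sizes n, m, s are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, s = 128, 64, 5                      # signal length, measurements, sparsity (illustrative)

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
b = A @ x_true                            # noiseless measurements

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v) at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```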

    A New Approach to Sparse Image Representation Using MMV and K-SVD

    This paper addresses the problem of image representation based on a sparse decomposition over a learned dictionary. We propose an improved matching pursuit algorithm for Multiple Measurement Vectors (MMV) and an adaptive algorithm for dictionary learning based on multi-Singular Value Decomposition (SVD), and combine them for image representation. Compared with the traditional K-SVD and orthogonal matching pursuit MMV (OMPMMV) methods, the proposed method runs faster and achieves a higher overall reconstruction accuracy.
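
    The sketch below shows the standard simultaneous orthogonal matching pursuit baseline for the MMV setting (all measurement vectors share one support); it is not the paper's improved pursuit algorithm or its multi-SVD dictionary update, and the dictionary size and sparsity level are illustrative assumptions.

```python
import numpy as np

def somp(D, Y, k):
    """Greedily select k atoms of dictionary D jointly for all columns of Y."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Pick the atom with the largest total correlation across all signals.
        scores = np.linalg.norm(D.T @ residual, axis=1)
        scores[support] = -np.inf                       # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        # Re-fit all signals on the current support by least squares.
        X_sub, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ X_sub
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support] = X_sub
    return X, support

rng = np.random.default_rng(3)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
X0 = np.zeros((64, 10))
X0[rng.choice(64, 4, replace=False)] = rng.standard_normal((4, 10))
Y = D @ X0                                              # jointly 4-sparse measurement vectors
X_hat, supp = somp(D, Y, 4)
print("reconstruction error:", np.linalg.norm(D @ X_hat - Y))
```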

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. Popular examples of these priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
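
    A minimal sketch of the forward-backward (proximal-gradient) scheme mentioned in point (iii), instantiated for the $\ell^1$ prior (i.e., ISTA for the lasso); problem sizes, the step size, the regularization weight, and the iteration count are illustrative assumptions rather than choices from the chapter.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2               # step size = 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step on the data term
        x = soft_threshold(x - tau * grad, tau * lam)    # backward (proximal) step on the prior
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = 1.0
b = A @ x0 + 0.01 * rng.standard_normal(50)              # noisy measurements
x_hat = forward_backward(A, b, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```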